22 research outputs found

    Context Aware Road-user Importance Estimation (iCARE)

    Road-users are a critical part of decision-making for both self-driving cars and driver assistance systems. Some road-users, however, are more important for decision-making than others because of their respective intentions, the ego vehicle's intention, and their effects on each other. In this paper, we propose a novel architecture for road-user importance estimation which takes advantage of the local and global context of the scene. For local context, the model exploits the appearance of the road users (which captures orientation, intention, etc.) and their location relative to the ego-vehicle. The global context in our model is defined based on the feature map of the convolutional layer of the module which predicts the future path of the ego-vehicle; this map contains rich global information about the scene (e.g., infrastructure, road lanes, etc.) as well as the ego vehicle's intention. Moreover, this paper introduces a new dataset of real-world driving, concentrated around intersections and annotated with important road users. Systematic evaluations of our proposed method against several baselines show promising results. Comment: Published in: IEEE Intelligent Vehicles (IV), 201
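    The local/global fusion described in the abstract can be sketched as a simple scoring function; this is an illustrative toy, not the paper's actual CNN architecture, and all names (RoadUser, importance_score, the weight vectors) are hypothetical:

    ```python
    # Hypothetical sketch of fusing local features (appearance, position relative
    # to the ego-vehicle) with a global context vector into an importance score.
    from dataclasses import dataclass
    from typing import List
    import math

    @dataclass
    class RoadUser:
        appearance: List[float]    # e.g. pooled features of the cropped road user
        rel_position: List[float]  # (dx, dy) relative to the ego-vehicle

    def importance_score(user: RoadUser, global_context: List[float],
                         w_local: List[float], w_global: List[float],
                         bias: float) -> float:
        """Logistic score over concatenated local + global features."""
        local = user.appearance + user.rel_position
        z = sum(w * x for w, x in zip(w_local, local))
        z += sum(w * g for w, g in zip(w_global, global_context))
        z += bias
        return 1.0 / (1.0 + math.exp(-z))  # probability the user is "important"
    ```

    Ranking all detected road users by this score would give the importance ordering the paper evaluates; in the actual system the features and weights come from learned convolutional layers.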

    Attention estimation by simultaneous analysis of viewer and view

    Abstract — This paper introduces a system for estimating the attention of a driver wearing a first-person-view camera, using salient objects to improve gaze estimation. A challenging dataset of pedestrians crossing intersections has been captured using Google Glass worn by a driver. A challenge unique to first-person views from cars is that the interior of the car can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation. The remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze by examining the eye corners and the center of the iris. This coarse gaze estimation is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time.
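    The final linking step, associating a coarse gaze estimate with the pedestrian detections, can be sketched as a nearest-detection lookup; the function name, box format, and distance threshold below are assumptions for illustration, not the paper's implementation:

    ```python
    # Illustrative sketch: the "focused" pedestrian is the detection whose
    # bounding-box centre lies nearest the projected gaze point in the image.
    from typing import List, Optional, Tuple

    Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image pixels

    def focused_pedestrian(gaze: Tuple[float, float],
                           detections: List[Box],
                           max_dist: float = 150.0) -> Optional[int]:
        """Index of the detection closest to the gaze point, or None if all
        detections are farther than max_dist pixels."""
        best, best_d2 = None, max_dist ** 2
        for i, (x1, y1, x2, y2) in enumerate(detections):
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            d2 = (cx - gaze[0]) ** 2 + (cy - gaze[1]) ** 2
            if d2 < best_d2:
                best, best_d2 = i, d2
        return best
    ```

    Restricting the detector to the non-dashboard region of interest, as the abstract describes, keeps the candidate list small before this association step.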

    Understanding head and hand activities and coordination in naturalistic driving videos

    Abstract — In this work, we propose a vision-based analysis framework for recognizing in-vehicle activities such as interactions with the steering wheel, the instrument cluster, and the gear. The framework leverages two views for activity analysis: a camera looking at the driver's hand and another looking at the driver's head. The techniques proposed can be used by researchers to extract 'mid-level' information from video, i.e., information that represents some semantic understanding of the scene but may still require an expert to distinguish difficult cases or leverage the cues to perform drive analysis. In contrast, 'low-level' video is large in quantity and cannot be used unless processed entirely by an expert. This work can help minimize manual labor, so that researchers may better benefit from the accessibility of the data and perform larger-scale studies.

    Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis

    Automotive systems provide a unique opportunity for mobile vision technologies to improve road safety by understanding and monitoring the driver. In this work, we propose a real-time framework for early detection of driver maneuvers. The implications of this study would allow for better behavior prediction, and therefore the development of more efficient advanced driver assistance and warning systems. Cues are extracted from an array of sensors observing the driver (head, hand, and foot), the environment (lane and surrounding vehicles), and the ego-vehicle state (speed, steering angle, etc.). Evaluation is performed on a real-world dataset with overtaking maneuvers, showing promising results. In order to gain better insight into the processes that characterize driver behavior, temporally discriminative cues are studied and visualized.

    Visual Attention from Dynamic Analysis of Head, Eyes and Salient Objects

    The face of a person conveys a wealth of information about his/her attentive state. In particular, the head and eyes have the potential to reveal where and at what the person is looking. Since humans primarily attend to objects of interest, knowledge of salient objects in the surrounding region can help to accurately infer the person's focus of visual attention. We present novel computational frameworks and systems to infer visual attention by analyzing the dynamics of the head, eyes, and salient objects. We evaluate the proposed systems in intelligent automobile spaces, with an emphasis on accurate, robust, and continuous performance in naturalistic driving conditions.

    Speech based emotion classification framework for driver assistance system

    Abstract — Automated analysis of human affective behavior has attracted increasing attention in recent years. A driver's emotion often influences driving performance, which can be improved if the car actively responds to the driver's emotional state. It is important for an intelligent driver support system to accurately monitor the driver's state in an unobtrusive and robust manner. The ever-changing environment while driving poses a serious challenge to existing techniques for speech emotion recognition. In this paper, we utilize contextual information about the outside environment as well as the user inside the car to improve emotion recognition accuracy. In particular, a noise cancellation technique is used to suppress noise adaptively based on the driving context, and gender-based context information is analyzed for developing the classifier. Experimental analyses show promising results. Index Terms — Emotion recognition; vocal expression; affective computing; affect analysis; context analysis